Search Results: "turbo"

22 November 2011

Thorsten Glaser: Those small nice tools we all write

This is both a release announcement for the next installment of The MirBSD Korn Shell, mksh R40b, and a follow-up to Sune's article about small tools of various degrees of usefulness. I hope I don't need to say too much about the first part; mksh(1) is packaged in a gazillion of operating environments (dear Planet readers, that of course includes Debian, which occasionally gets a development snapshot; I'll wait with uploading R40c until that gcc bug fix, now two months old, finally finds its way into the packages for armel and armhf). Ah, we're getting Arch Linux (after years) to include mksh now. (Probably because they couldn't stand the teasing that Arch Hurd included it one day after having been told about its existence, wondering why it built without needing patches on Hurd…) MSYS is a supposedly supported target now, people are working on WinAPI and DJGPP in their spare time, and the Cygwin and Debian packagers have deprecated pdksh in favour of mksh (thanks!). So, everything is looking well on that front.

I started a collection of shell snippets some time ago, where most of those small things of mine end up. Even stuff I write at work: we're an Open Source company and can generally publish under (currently) AGPLv3 or (if extending existing code) that code's licence. I chose git as the SCM in that FusionForge instance so that people would hopefully use it and contribute to it without fear, as it's hosted on my current money source's servers. (You can just clone it.) Feel free to register and ask for membership, to extend it (only if your shell-fu is up to the task; KNOPPIX-style scripts would be a bad style(9) example, as the primary goal of the project is to give good examples to people who learn shell coding by looking at other people's code). Maybe you like my editor, too? At OpenRheinRuhr, the Atari people sure liked it, as it uses WordStar-like key combinations, standardised across a lot of platforms and vendors (DR DOS Editor, Turbo Pascal, Borland C++ for Windows, …).

ObPromise: a posting to raise the level of ferrophility on the Planet aggregators this wlog reaches (got pix)

22 June 2011

Craig Small: First Look at Python

When the programming language Python came out, it seemed like an interesting idea, but I didn't really see the point. I already know Perl, PHP and C (plus a few others), so why learn another language? So until recently, I didn't. Recent frustrations with PHP and how loose it is with things like variables made me have another look at Python and what it could do for me. As just learning a language by itself is pretty boring, I decided to write some small programs in the language, and used the framework TurboGears to do it. First of all, learning the basics of Python coming from Perl or PHP is dead simple. The idea of what a language looks like is not too different between these three, so it was a snap to write simple stuff in Python. So, what does yet another language give me? It's not one-way though; there have been some challenges. Quite likely these are more my deficiencies than the language's. Despite the minor problems, Python really does give you much, much more than PHP, and somewhat more than Perl. For me and what I am intending to write, it is worth learning yet another language.

2 June 2011

Craig Small: What happens without software testing

Well, jffnms 0.9.0 was, well, a good example of what can go wrong without adequate testing. Having it written in PHP makes it difficult to test, because you can have entire globs of code that are completely wrong but never activated (because an if statement evaluates false, for example) and you won't get an error. It also had some database migration problems. This is the most difficult and annoying part of releasing versions of JFFNMS; there are so many things to check. I've been looking at SQLAlchemy, which is part of TurboGears. It's a pretty impressive setup and lets you get on with writing your application instead of mucking around with the low-level stuff. It's a bit of a steep learning curve learning Python AND SQLAlchemy AND TurboGears, but I've got some rudimentary code running OK and it's certainly worth it (Python erroring on unassigned variables but not forcing you to declare them is a great compromise). The best thing is that you can develop on an SQLite database but deploy using MySQL or PostgreSQL with a minor change, as sketched below. Python and TurboGears both emphasise automatic testing; ideally the tests should cover all the code you write, and the authors even suggest you write the test first, then implement the feature. After chasing down several bugs, some of which I introduced fixing other bugs, automatic testing would make my life a lot easier, and perhaps I wouldn't dread the release cycle so much.
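A minimal sketch of that backend swap (the table name and connection URLs are made up for illustration; only the create_engine URL changes between development and deployment):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

# Develop against a local SQLite file...
engine = create_engine("sqlite:///dev.db")
# ...and deploy by changing only the URL, e.g.:
# engine = create_engine("postgresql://user:password@dbhost/jffnms")
# engine = create_engine("mysql://user:password@dbhost/jffnms")

metadata = MetaData()
hosts = Table("hosts", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(64)))
metadata.create_all(engine)   # emits the right DDL for whichever backend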

25 April 2011

Mirco Bauer: Amazon MP3 Downloader and 64-bit Linux

Last night I wanted to buy an album on Amazon, but I couldn't complete the checkout, as the site required me to install the Amazon MP3 Downloader to purchase and download the album. The downloader is not needed for a single song, but buying each song separately would be more expensive and more work. The good news is that it automatically offered me packages for different Linux distros (Ubuntu, Debian, Fedora and OpenSUSE) instead of telling me off for using Linux and leaving me behind with a Windows-only download. But here comes the catch: all offered packages are only for the Intel 32-bit architecture. Now this is a showstopper for me, as I am running an AMD64 Debian, which is a 64-bit architecture. I first tried to download and run the 32-bit Debian package nonetheless, as there was some hope with the ia32-libs and ia32-libs-gtk packages. But this did not work, as the application needs gtkmm libraries like libglademm, and it bailed when I tried to run it. So I filed a wishlist bug report against ia32-libs-gtk for inclusion of gtkmm and possibly other libraries needed to run the Amazon MP3 Downloader on AMD64. So I bought the album using MusicLoad instead, which simply puts all songs in a single archive on the fly and lets me download that archive. When I tweeted my frustration on Twitter, Jo Shields and Gabriel Burt hinted that there would have been a much simpler solution to this issue: using Banshee, which includes an Amazon MP3 Store plugin. [Screenshot: Banshee showing the Amazon MP3 Store plugin.] This plugin allows you to log into your Amazon account, browse their store like the regular Amazon store, play the song samples directly, purchase songs, download them and import them into Banshee's database so you can play them right away. And as if this wasn't good enough yet, purchasing MP3 songs on Amazon using Banshee automatically makes a donation to the GNOME Foundation. As I am surely not the only one who forgot or wasn't aware of this awesome solution, it deserved some blogging. Update: some people pointed out that clamz is also available for making MP3 purchases on Amazon.

3 January 2011

Mike Hommey: The measured effect of I/O on application startup

I did some experiments with the small tool I wrote in the previous post in order to gather some startup data about Firefox. It turns out it can't flush directories and other metadata from the page cache, which unfortunately makes it useless for what I'm most interested in. So, I gathered various startup information about Firefox, showing how the page cache (thus I/O) has a huge influence on startup. The data in this post are mean times and 95% confidence intervals for 50 startups with an existing but fresh profile, in a mono-processor GNU/Linux x86-64 virtual machine (using kvm) with 1 GB RAM and a 10 GB raw hard drive partition over USB, running, except where said otherwise, on an i7 870 (up to 3.6 GHz with Intel Turbo Boost). The operating system itself is an up-to-date Debian Squeeze running the default GNOME environment. Firefox startup time is measured as the difference between the time in ms right before starting Firefox and the time in ms as returned by JavaScript in a data:text/html page used as the home page.

Startup vs. page cache
                                              Average startup time (ms, ±95% CI)
Entirely cold cache (drop_caches)             5887.62 ±0.88%
Cold cache after boot                         3382.0  ±0.51%
Selectively cold cache (see below)            2911.46 ±0.48%
Hot cache (everything previously in memory)    250.74 ±0.18%
The selectively cold cache case makes use of the flush program from the previous post and a systemtap script used to get the list of files read during startup. This script will be described in a separate post. As you can see, profiling startup after echo 3 > /proc/sys/vm/drop_caches takes significantly more time than under the conditions users would normally experience, because all the system libraries that would normally be in the page cache have been flushed, biasing the view one can have of the actual startup performance. Mozilla build bots were running, until recently, a ts_cold startup test that, as I understand it, had this bias (which is part of why it was stopped). The Hot cache value is also interesting because it shows that the vast majority of cold startup time is due to hard disk I/O (and no, there is no difference in the number of page faults).

I/O vs. CPU

Interestingly, testing on a less beefy machine (Core 2 Duo 2.2GHz) with the same USB disk and kvm setup shows something not entirely intuitive:
                          Average (ms, ±95% CI)
Entirely cold cache       6748.42 ±1.01%
Cold cache after boot     3973.64 ±0.53%
Selectively cold cache    3445.7  ±0.43%
Hot cache                  570.58 ±0.70%
I, for one, would have expected I/O-bound startups to be slower by only around 320 ms, which is roughly the hot cache startup difference, or, in other words, the CPU-bound startup difference. But I figured I was forgetting an important factor.

I/O vs. CPU scaling

Modern processors do frequency scaling, which allows the processor to run slowly when underused, and faster when needed, thus saving power. It was first used on laptop processors to reduce power drawn from the battery, allowing batteries to last longer, and is now used on desktop processors to reduce power consumption. It unfortunately has a drawback, in that it introduces some latency when the scaling kicks in. A not-so-nice side effect of frequency scaling is that when a process is waiting for I/O, the CPU is underused, which usually makes it run at its slowest frequency. When the I/O ends and the process runs again, the CPU can go back to full speed. This means every I/O can induce CPU scaling latency on top of the latency of, e.g., disk seeks. And it actually has much more impact than I would have thought. Here are results on the same Core 2 Duo, with frequency scaling disabled and the CPU forced to its top speed:
                          Average (ms, ±95% CI)
Entirely cold cache       5824.1  ±1.13%
Cold cache after boot     3441.8  ±0.43%
Selectively cold cache    3025.72 ±0.29%
Hot cache                  576.88 ±0.98%
(I would have liked to do the same on the i7, but Intel Turbo Boost complicates things and I would have needed to gather two new sets of data.) Update: I actually found a way to force one core to its max frequency and run the kvm processes on it, giving the following results:
                          Average (ms, ±95% CI)
Entirely cold cache       5395.94 ±0.83%
Cold cache after boot     3087.47 ±0.31%
Selectively cold cache    2673.64 ±0.21%
Hot cache                  258.52 ±0.35%
I haven't gathered enough data to have accurate figures, but it also seems that forcing the CPU frequency to a fraction of the fastest supported frequency gives the intuitive results, where the difference between all I/O-bound startup times is the same as the difference between hot cache startup times. As such, I/O-bound startup improvements would be best measured as an improvement in the difference between cold and hot cache startup times, i.e. (cold2 - hot2) - (cold1 - hot1), at a fixed CPU frequency.

Startup vs. desktop environment

We saw above that the amount of system libraries in the page cache directly influences application startup times. And not all GNU/Linux systems are made equal. While the above times were obtained under a GNOME environment, some other desktop environments don't use the same base libraries, which can force Firefox to load more of them at cold startup. The most used environment besides GNOME is KDE, and here is what cold startup looks like under KDE:
                     Average startup time (ms, ±95% CI)
GNOME cold cache     3382.0 ±0.51%
KDE cold cache       4031.9 ±0.48%
It's significantly slower, yet not as slow as the entirely cold cache case. This is due to KDE not using (and thus not pre-loading) some of the GNOME core libraries, while still having some libraries in common, like e.g. libc (obviously) or dbus.
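For reference, here is a minimal sketch (mine, not Mike's actual tooling) of the two knobs these measurements rely on: dropping the page cache and pinning the CPU frequency. Both writes require root, and the cpufreq sysfs path is an assumption about the test machine:

import subprocess

def drop_caches():
    # The "entirely cold cache" case: flush dirty pages first, then drop the
    # page cache, dentries and inodes (same as: echo 3 > /proc/sys/vm/drop_caches)
    subprocess.check_call(["sync"])
    f = open("/proc/sys/vm/drop_caches", "w")
    f.write("3\n")
    f.close()

def pin_cpu_frequency(cpu=0):
    # Disable frequency scaling on one core by forcing the "performance"
    # governor, so waiting on I/O no longer drops the clock speed
    f = open("/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor" % cpu, "w")
    f.write("performance\n")
    f.close()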

1 November 2010

Michal Čihař: Cleaning up the web

After the recent switch of this blog to Django, I've also planned to switch the rest of my website to the same engine. Most of the code is already written; however, I've found some forgotten parts which nobody has used for a really long time. As most of the things there are free software, I don't like the idea of removing them completely; however, it is quite unlikely that somebody will want to use these ancient things, especially as they quite lack documentation and I have really forgotten about them (most of them being Turbo Pascal things for DOS, Delphi components for Windows and so on). Anyway, they will probably live on only at dl.cihar.com, in case anybody is interested.

24 July 2010

Andrew Pollock: [geek] Cleaning up from 20 years ago

I'm a terrible hoarder. I hang onto old stuff because I think it might be fun to have a look at again later, when I've got nothing to do. The problem is, I never have nothing to do, or when I do, I never think to go through the stuff I've hoarded. As time goes by, the technology becomes more and more obsolete, to the point where it becomes impractical to look at it. Today's example: the 3.5" floppy disk. I've got a disk holder thingy with floppies in it dating back to the mid-nineties and earlier. Stuff from high school, which I thought might be good for a giggle to look at again some time. In the spirit of recording stuff before I throw it out, I present the floppy disks I'm immediately tossing out.
MS-DOS 6.2 and 6.22
Ah, the DOS days. I remember excitedly looking forward to new versions of MS-DOS to see what new features they brought. I remember DOS 5.0 being the revolutionary one. The dir command grew a ton of options.
XTreeGold
More from the DOS days, when file management was such a pain in the arse that there was a business model to do it better. ytree seems like a fairly good looking clone of it for Linux.
WinZip for Windows 95, Windows NT and Windows 3.1
Ha. I actually paid money for an official WinZip floppy disk.
Nissan Maxima Electronic Brochure
I'm amazed this fit on a floppy disk.
Turbo Pascal 6.0
Excluding GW-BASIC, this was the first "real" language I dabbled in. I learned it in Information Processing & Technology in grades 11 and 12. I never got into the OO stuff that version 6.0 was particularly geared towards.
Where in the World is Carmen Sandiego?
Awesome educational game. I was first introduced to this on the Apple ][, and loved it. This deserves being resurrected for a console.
Captain Comic II
A good sequel to the original, but I never found a version that worked properly (I could never convince it to let me finish it).
HDM IV
Ah, Hard Disk Menu. A necessity from the DOS days when booting up to a C:\> prompt just really didn't cut it. I used to love customising this thing.
ARJ, LHA, PK-ZIP
Of course, you needed a bazillion different decompression programs back in the days of file trading. I guess things haven't changed much with Linux. There's gzip, bzip2, 7zip, etc.
Zeliard
I wasted so many hours playing this. The ending was so hard.
MicroSQL
This was some locally produced software from Brisbane, written in Turbo Pascal (I think). It was a good introduction to SQL; I used it in high school and in my first stab at university.
DOOM and DOOM II
Classics. I don't seem to have media for it any more, but I also enjoyed playing Heretic and Hexen. Oooh, Hexen has been ported to Linux? Must check that out...
SimCity 2000
I wasn't a big fan of this game, but I liked the isometric view that 2000 had, compared to the previous version.

15 July 2010

Enrico Zini: On python, frameworks and TOOWTDI

On python, frameworks and TOOWTDI

The Python world is ridden with frameworks, microframeworks, metaframeworks and their likes. They are often very clever things, but more often than not they are a tool of despair. A very peculiar thing about Python web frameworkish things is that there are so many of them. There's cherrypy (in its various API redesigns), fapws, gunicorn, bottle and flask, paste, werkzeug and flup, tornado, pylons, turbogears 1 and 2, django, repoze who, what and whatnot, all the myriad of rendering engines and buffet as a metathing on top of them, diesel, twisted, and I apologise if I don't spend my day listing and hyperlinking them all; I hope I made my point.

Frameworks are supposed to standardise some aspects of programming; the nice thing about standards is that there are so many of them to choose from, and they all suck, so I'll make my own. But wasn't Python supposed to be the world of TIOOWTDI? OK, everybody knows it isn't. Just in the standard library there are 2 implementations of pickle and 2 urllibs. But people like the TIOOWTDI idea. I believe the reason people like the TIOOWTDI idea is because it creates a framework. It standardises some aspects of programming, and defines building blocks that guarantee that people doing similar jobs will be using similar sets of components.

Let's take for example the datetime module in the standard library. It is an embarrassing example of a badly designed module, so embarrassing that the standard library documentation continuously fails to document its fundamental design flaws and common work-arounds, hoping that no one notices them; as a consequence, each poor soul starting to use it for nontrivial things has to google for hours in despair to rediscover in how many ways it's broken. But still, datetime works as a structure to hold those values that make a date, time, or full UTC timestamp. For that job it's become the standard, and as such it's an important component of the Python TIOOWTDI framework: one can use it to exchange datetimes among different libraries. For example, ORMs are using it instead of rolling their own, which makes database programming so much easier when date/time is involved. Even if the implementation is far from perfect, once we apply TIOOWTDI to dates and timestamps, Python code from different authors can exchange dates without worries. This is much better than having 3 different superior datetime libraries and having to convert date objects from one to another when passing values from a web form to an ORM.

There is an often overlooked Python framework: the Python framework itself. It's called TIOOWTDI. All the micro-mini-midi-maxi-meta-frameworks that people scatter around are, or should be, just experiments, proofs of concept, competing ideas waiting to be distilled into The Only One Way, bringing the Python experience one step forward. What is unfortunate is that this last distillation happens so rarely that people get used to the idea of having to use proof-of-concept code to get things done.

Update: this post apparently wasn't very clear, so here is some clarification:
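(An aside of mine, not part of Enrico's clarification: one concrete instance of the datetime flaws alluded to above is that the standard library hands out naive objects even for UTC, so values from timezone-aware libraries cannot even be compared with them; yet as a plain value container the type interoperates fine, which is exactly the TIOOWTDI point.)

from datetime import datetime, timedelta

now = datetime.utcnow()   # a UTC timestamp, and yet...
print now.tzinfo          # ...None: the object does not know it is UTC

# Comparing it with an offset-aware datetime from another library raises
# TypeError: can't compare offset-naive and offset-aware datetimes

# Still, as a dumb container it can be passed between libraries just fine:
print (now + timedelta(hours=1)).isoformat()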

15 April 2010

Clint Adams: DebConf Sponsorship Application Deadline Today. Register Now.

H&R Block is like a KFC for your chicken. No, not really.

28 March 2010

Stefano Zacchiroli: RC bugs of the week - issue 25

RCBW - #25 The last (long) week has finally passed: first in Geneva visiting Gismo, then in Sierre/Crans-Montana to present a paper at SAC 2010, then back in Paris for half a day before a day in Brussels for the 2nd-year Mancoosi project review. (And of course campaigning has been going on all along...) All went fine and I can finally relax a bit for a weekend back "home" (i.e. Paris). Regarding RCBW, it looks like in the past 2 months I've stabilized at around 3 issues per month, which makes me happy nevertheless. Without any further ado, here are this issue's squashes: Random points:

9 February 2010

Aigars Mahinovs: Debug and optimization do NOT mix!

This has robbed me of several days of my life, so I want to bring Google juice to this problem. If you have a Pylons or TurboGears application, or anything else that uses the fantastic EvalException WSGI middleware for web debugging of your web program, and you have the following symptoms:
* on a crash, the traceback page shows up, but it has no CSS style and no images
* http://127.0.0.1:5000/_debug/media/plus.jpg returns a 404 (where 127.0.0.1:5000 is the path of your application) with "Nothing could be found to match _debug"
then the problem you are facing is the environment variable PYTHONOPTIMIZE or the Apache mod_wsgi option WSGIPythonOptimize. Unset them both or your debug environment will not work!
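A small guard of my own (not part of EvalException; assumes Python >= 2.6 for sys.flags) to fail fast when the interpreter runs optimized, which is exactly the condition that breaks the /_debug/ media serving described above:

import os
import sys

# sys.flags.optimize is non-zero when Python runs with -O or with
# PYTHONOPTIMIZE set in the environment
if sys.flags.optimize or os.environ.get("PYTHONOPTIMIZE"):
    raise RuntimeError("running optimized: unset PYTHONOPTIMIZE (and "
                       "WSGIPythonOptimize under mod_wsgi) before using "
                       "EvalException")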

29 November 2009

Aigars Mahinovs: New hardware planning

My primary workstation is a three-and-a-half-year-old Dell XPS M1710 laptop and it is getting old: the 320 GB hard drive is getting small and slow, the 3 GB of RAM (expanded from 2 GB) look too small, and the screen is turning brown in one corner. Also dead or dying: keyboard (after a cat+coffee incident), built-in speakers, battery, power adapter (twice replaced), DVD writer (reads but does not write any more) and fans (one replaced, one getting louder by the week). Also the video card is a bit slow for today's needs. So I set aside some money and started considering the options. In short, I could get another laptop OR get a regular computer for the same price but with twice the power. After considering all the laptop options up to 1000 LVL, I decided to get a regular computer, and if/when I'll need to travel I'll either take one of the old laptops with me or get a new netbook. Now comes the fun part: choosing the components! So far my choices are as follows: And that is it: a great setup that will last me a few years for around 700 LVL, and it will have more partial upgradability than the laptop I paid twice as much for 3 years ago. Any tips? Did I miss anything? Is it likely that anything in the above list will cause me any trouble in the latest Ubuntu/Debian installations? Update: I got all of the above, except that the 500 GB HDD is not of the same kind (faster, cooler, but higher price per GB) and there is no Blu-ray (no drive in LV at the moment and I did not want to wait a few weeks). The results are good, but I wish I had gone with a 5 LVL more expensive video card that has a quieter cooler. Now I am buying a separate video card cooler for around 20 LVL that will be both more efficient and much quieter than the stock cooler; the Zalman VF1000 is the model I chose. It has not arrived yet. P.S. It looks like updating a post in WP automatically bumps it back into the Planet feed. Sorry :(

4 November 2009

Enrico Zini: Custom function decorators with TurboGears 2

Custom function decorators with TurboGears 2 I am exposing some library functions using a TurboGears2 controller (see web-api-with-turbogears2). It turns out that some functions return a dict, some a list, some a string, and TurboGears 2 only allows JSON serialisation for dicts. A simple work-around for this is to wrap the function result into a dict, something like this:
@expose("json")
@validate(validator_dispatcher, error_handler=api_validation_error)
def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
    # Call API
    res = self.engine.list_colours(filter, productID, maxResults)
    # Return result
    return dict(r=res)
It would be nice, however, to have an @webapi() decorator that automatically wraps the function result with the dict:
def webapi(func):
    def dict_wrap(*args, **kw):
        return dict(r=func(*args, **kw))
    return dict_wrap
# ...in the controller...
    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    @webapi
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
        # Return result
        return res
This works, as long as @webapi appears last in the list of decorators. This is because if it appears last it will be the first to wrap the function, and so it will not interfere with the tg.decorators machinery. Would it be possible to create a decorator that can be put anywhere in the decorator list? Yes, it is possible, but tricky, and it gives me the feeling that it may break in any future version of TurboGears:
class webapi(object):
    def __call__(self, func):
        def dict_wrap(*args, **kw):
            return dict(r=func(*args, **kw))
        # Migrate the decoration attribute to our new function
        if hasattr(func, 'decoration'):
            dict_wrap.decoration = func.decoration
            dict_wrap.decoration.controller = dict_wrap
            delattr(func, 'decoration')
        return dict_wrap
# ...in the controller...
    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    @webapi()
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
        # Return result
        return res
As a convenience, TurboGears 2 offers, in the decorators module, a way to build decorator "hooks":
class before_validate(_hook_decorator):
    '''A list of callables to be run before validation is performed'''
    hook_name = 'before_validate'
class before_call(_hook_decorator):
    '''A list of callables to be run before the controller method is called'''
    hook_name = 'before_call'
class before_render(_hook_decorator):
    '''A list of callables to be run before the template is rendered'''
    hook_name = 'before_render'
class after_render(_hook_decorator):
    '''A list of callables to be run after the template is rendered.

    Will be run before it is returned up the WSGI stack'''
    hook_name = 'after_render'
The way these are invoked can be found in the _perform_call function in tg/controllers.py. To show an example use of these hooks, let's add some polygen wisdom to every data structure we return:
class wisdom(decorators.before_render):
    def __init__(self, grammar):
        super(wisdom, self).__init__(self.add_wisdom)
        self.grammar = grammar
    def add_wisdom(self, remainder, params, output):
        from subprocess import Popen, PIPE
        output["wisdom"] = Popen(["polyrun", self.grammar], stdout=PIPE).communicate()[0]
# ...in the controller...
    @wisdom("genius")
    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
        # Return result
        return res
These hooks cannot however be used for what I need, that is, to wrap the result inside a dict. The reason is that they are called in this way:
        controller.decoration.run_hooks(
                'before_render', remainder, params, output)
and not in this way:
        output = controller.decoration.run_hooks(
                'before_render', remainder, params, output)
So it is possible to modify the output (if it is a mutable structure) but not to exchange it with something else. Can we do even better? Sure we can. We can assimilate @expose and @validate inside @webapi to avoid repeating those same many decorator lines over and over again:
class webapi(object):
    def __init__(self, error_handler = None):
        self.error_handler = error_handler
    def __call__(self, func):
        def dict_wrap(*args, **kw):
            return dict(r=func(*args, **kw))
        res = expose("json")(dict_wrap)
        res = validate(validator_dispatcher, error_handler=self.error_handler)(res)
        return res
# ...in the controller...
    @expose("json")
    def api_validation_error(self, **kw):
        pylons.response.status = "400 Error"
        return dict(e="validation error on input fields", form_errors=pylons.c.form_errors)
    @webapi(error_handler=api_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
        # Return result
        return res
This gets rid of @expose and @validate, and provides almost all the default values that I need. Unfortunately I could not find out how to access api_validation_error from the decorator so that I could pass it to the validator; therefore I am left with the inconvenience of having to pass it explicitly every time.

15 October 2009

Enrico Zini: Building a web-based API with Turbogears2

Building a web-based API with Turbogears2 I am using TurboGears2 to export a Python API over the web. Every API method is wrapped by a controller method that validates the parameters and returns the results encoded in JSON. The basic idea is this:
@expose("json")
def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
    # Call API
    res = self.engine.list_colours(filter, productID, maxResults)
    # Return result
    return res
To validate the parameters we can use forms; it's their job after all:
class ListColoursForm(TableForm):
    fields = [
            # One field per parameter
            twf.TextField("filter", help_text="Please enter the string to use as a filter"),
            twf.TextField("productID", help_text="Please enter the product ID"),
            twf.TextField("maxResults", validator=twfv.Int(min=0), default=200, size=5, help_text="Please enter the maximum number of results"),
    ]
list_colours_form = ListColoursForm()
#...
    @expose("json")
    @validate(list_colours_form, error_handler=list_colours_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Parameter validation is done by the form
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
        # Return result
        return res
All straightforward so far. However, this means that we need two exposed methods for every API call: one for the API call itself and one for the error handler. For every API call, we have to type the name several times, which is error prone and risks getting things mixed up. We can however have a single error handler for all methods:
def get_method():
    '''
    The method name is the first url component after the controller name that
    does not start with 'test'
    '''
    found_controller = False
    for name in pylons.c.url.split("/"):
        if not found_controller and name == "controllername":
            found_controller = True
            continue
        if name.startswith("test"):
            continue
        if found_controller:
            return name
    return None
class ValidatorDispatcher:
    '''
    Validate using the right form according to the value of the "method" field
    '''
    def validate(self, args, state):
        method = args.get("method", None)
        # Extract the method from the URL if it is missing
        if method is None:
            method = get_method()
            args["method"] = method
        return forms[method].validate(args, state)
validator_dispatcher = ValidatorDispatcher()
This validator will try to find the method name, either as a form field or by parsing the URL. It will then use the method name to find the form to use for validation, and pass control to the validate method of that form. We then need to add an extra "method" field to our forms, and arrange the forms inside a dictionary:
class ListColoursForm(TableForm):
    fields = [
            # One hidden field to have a place for the method name
            twf.HiddenField("method")
            # One field per parameter
            twf.TextField("filter", help_text="Please enter the string to use as a filter"),
    #...
forms["list_colours"] = ListColoursForm()
And now our methods become much nicer to write:
    @expose("json")
    def api_validation_error(self, **kw):
        pylons.response.status = "400 Error"
        return dict(form_errors=pylons.c.form_errors)
    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Parameter validation is done by the form
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
        # Return result
        return res
api_validation_error is interesting: it returns a proper HTTP error status, and a JSON body with the details of the error, taken straight from the form validators. It took me a while to find out that the form errors are in pylons.c.form_errors (and for reference, the form values are in pylons.c.form_values). pylons.response is a WebOb Response that we can play with. So now our client side is able to call the API methods, and get a proper error if it calls them wrong. But now that we have the forms ready, it doesn't take much to display them in web pages as well:
def _describe(self, method):
    "Return a dict describing an API method"
    ldesc = getattr(self.engine, method).__doc__.strip()
    sdesc = ldesc.split("\n")[0]
    return dict(name=method, sdesc = sdesc, ldesc = ldesc)
@expose("myappserver.templates.myappapi")
def index(self):
    '''
    Show an index of exported API methods
    '''
    methods = dict()
    for m in forms.keys():
        methods[m] = self._describe(m)
    return dict(methods=methods)
@expose('myappserver.templates.testform')
def testform(self, method, **kw):
    '''
    Show a form with the parameters of an API method
    '''
    kw["method"] = method
    return dict(method=method, action="/myapp/test/"+method, value=kw, info=self._describe(method), form=forms[method])
@expose(content_type="text/plain")
@validate(validator_dispatcher, error_handler=testform)
def test(self, method, **kw):
    '''
    Run an API method and show its prettyprinted result
    '''
    res = getattr(self, str(method))(**kw)
    return pprint.pformat(res)
In a few lines, we have all we need: an index of the API methods (including their documentation taken from the docstrings!), and for each method a form to invoke it and a page to see the results. Make the forms children of AjaxForm, and you can even see the results together with the form.
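To round it off, here is a minimal client-side sketch (mine, not from the post; the host and mount point are hypothetical) of calling one of the JSON-exposed methods and handling the validation error reply:

import urllib
import urllib2
import simplejson

params = urllib.urlencode({"filter": "red", "maxResults": 10})
try:
    res = urllib2.urlopen("http://localhost:8080/myapp/list_colours?" + params)
    print simplejson.load(res)           # the dict serialised by @expose("json")
except urllib2.HTTPError, e:
    # api_validation_error replies with status "400 Error" and a JSON body
    # describing the form errors
    print e.code, simplejson.load(e)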

19 September 2009

Stefano Zacchiroli: RC bugs of the week - week 3

RCBW - week #3 And this week we have a new player; welcome aboard, David! This week's summary of mine follows.
(As usual, dates are not strictly the day on which a bug was dealt with, but rather the day of the week to which I assigned the bug.) Comments / ponderings of this week:

11 September 2009

Zak B. Elep: New Server

The site was down for almost a week, due to my provider being DDoSed. Email was broken even more, as SMTP was the target; it was rather odd, though, that the data center decided to block outgoing SMTP connections from a block of IPs of which my VPS is unfortunately a member. I've since switched to a new provider. This makes my third OpenVZ VPS, though I was meaning to move on to Xen. At 6 US dollars a month, though, it seems quite a bargain, so I'll have to see where this goes. The move also allowed me to change some stuff under the hood. Apache is gone, replaced by nginx (although this required having blosxom running under a thttpd backend, until I can whip up a FastCGI equivalent). BIND gave way to maradns, cutting down memory usage significantly. And dropbear replaces the OpenSSH server. Exim 4 and dovecot remain though, as they seem to be quite big (or small) enough for my purposes. I'll document more of the changes in later posts. In the meantime, I need to catch up on emails and packaging...

5 June 2009

Stefano Zacchiroli: turbogears 2 packaging - take 3

TurboGears 2 packaging - (another) status update Brief status update on TG2 packaging. As expected, this week has been very busy and I'm writing the update offline on a train, but nevertheless I've got good news to share: ... the last item above would theoretically mean that TG2 is already usable, but that is not the case due to the infamous #507909, which breaks both TG2 and TG1 in unstable (the latter was working in Lenny, and I had to resort to a Lenny chroot for a recent course I taught about web development). In short, turbojson is in unstable in spite of the lack of several of its dependencies, making it impossible to import. ..., stay tuned, keep faith, bear with me, almost there, ... etc.

31 May 2009

Stefano Zacchiroli: turbogears 2 packaging - take 2

TurboGears 2 packaging - status update Brief status update about my TurboGears 2 packaging efforts, after some evening work today: Here is my ex-virgin /usr/local/lib/python2.5/site-packages/ as it looks now:
    zack@usha:~$ ls /usr/local/lib/python2.5/site-packages/ -1
    AddOns-0.6-py2.5.egg
    BytecodeAssembler-0.3-py2.5.egg
    Catwalk-2.0.2-py2.5.egg
    easy-install.pth
    Extremes-1.1-py2.5.egg
    PEAK_Rules-0.5a1.dev_r2582-py2.5.egg
    prioritized_methods-0.2.1-py2.5.egg
    sprox-0.5.5-py2.5.egg
    SymbolType-1.0-py2.5.egg
    tg.devtools-2.0-py2.5.egg
    tgext.admin-0.2.4-py2.5.egg
    tgext.crud-0.2.4-py2.5.egg
    TurboGears2-2.0-py2.5.egg
    tw.forms-0.9.3-py2.5.egg
    zope.sqlalchemy-0.4-py2.5.egg

At this point, it should already be possible to install the 2 main eggs composing TurboGears 2 (tg.devtools and TurboGears2 itself) relying only on Debian package dependencies ... modulo some "details": This evening's motto: ci siamo quasi ("we're almost there"). Next week I'll be pretty busy, but I guess some night work (or some of you, fellow readers :-) ) will come to the rescue.

30 May 2009

Stefano Zacchiroli: kick-starting turbogears 2 packaging

TurboGears 2 in Debian ... soon! After a long incubation, TurboGears 2 was released a few days ago. Historically, I have preferred TurboGears over Django for being closer to the open source philosophy of reusing existing components. Since the long 2.0 release cycle was marking a gap with Django, I was eager to test 2.0, and I was delighted to find it in Debian. Unfortunately, it doesn't seem to be in good shape yet. In particular it lacks several dependencies before it can even be used to quickstart a project, in spite of its presence in unstable. To give an idea of the needed work: after having installed it manually (via easy_install), my virgin /usr/local/lib/python2.5/site-packages/ has been polluted by 26 egg-thingies. I decided to spend some week-end time to start closing the gap, because we cannot lack such an important web framework in Debian (and ... erm, yes, also because I need it :-) ). Here is the current status of what has been ITP-ed / packaged already (by yours truly):

Where to

Considering, foolishly, the last package as already done, the way to go is still long:
    zack@usha:~$ ls /usr/local/lib/python2.5/site-packages/ | grep -v repoze.what
    AddOns-0.6-py2.5.egg
    BytecodeAssembler-0.3-py2.5.egg
    Catwalk-2.0.2-py2.5.egg
    easy-install.pth
    Extremes-1.1-py2.5.egg
    PEAK_Rules-0.5a1.dev_r2582-py2.5.egg
    prioritized_methods-0.2.1-py2.5.egg
    sprox-0.5.5-py2.5.egg
    sqlalchemy_migrate-0.5.2-py2.5.egg
    SymbolType-1.0-py2.5.egg
    tg.devtools-2.0-py2.5.egg
    tgext.admin-0.2.4-py2.5.egg
    tgext.crud-0.2.4-py2.5.egg
    TurboGears2-2.0-py2.5.egg
    tw.forms-0.9.3-py2.5.egg
    WebFlash-0.1a9-py2.5.egg
    zope.sqlalchemy-0.4-py2.5.egg

Possibly, some of them are already in Debian, hidden somewhere, but surely there is still work to be done. If you want to help, you are more than welcome. The rules are simple: all packages will be maintained under the umbrella of the Python Modules Team, but you should be willing to take responsibility as the primary maintainer. Please get in touch with me if you are interested, as I'm in turn already in touch with some other very kind volunteers (thanks Enrico and Federico2!) to coordinate who-is-doing-what.

Preview packages available

What has already been packaged, including a temporary workaround for python-transaction, will be available in experimental after NEW processing. The idea is that nothing will go to unstable until TurboGears 2 is (proven to be) fully functional. In the meantime, packages are available from my personal APT repository (signed by my key); here are the friendly /etc/apt/sources.list lines:
    deb http://people.debian.org/~zack/debian zack-unstable/
    deb-src http://people.debian.org/~zack/debian zack-unstable/

Versions are tilde-friendly and shouldn't get in your way when the official packages hit unstable.

Packaging multiple-egg / multiple-upstream packages

In all this, I've faced an interesting problem with the python-repoze.{who,what}-plugins packages. They correspond to a handful of plugins, each of which is about 20/30 KB. I didn't consider it appropriate to prepare 5 different packages, due to the potential for archive bloat. Hence, you get the usual problems of multiple-upstream packages. To counter some of them, as well as some egg-specific packaging annoyances with multiple upstreams, I wrote a couple of very simple helpers. I'm no Python-packaging guru so, lazyweb, if you spot in this choice something I utterly overlooked, or if you have improvements to suggest, please let me know.

Repacking eggs

An interesting problem will be faced in trying to integrate the above approach with Python modules that are shipped only as .egg files (i.e., no tarballs ... yes, there are such horrifying things out there). To preserve uniformity, we would need uscan to support --repack-ing of .egg files as if they were simple .zip archives. Since eggs are .zip archives ... why not?
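To illustrate the idea (this is not uscan's implementation, just a sketch of the observation above; it assumes Python >= 2.6 for ZipFile.extractall, and the file names are taken from the listing above purely as an example):

import os
import tarfile
import zipfile

def repack_egg(egg_path, tar_path):
    # An egg really is a zip archive: unpack it, then re-pack it as a plain
    # .tar.gz suitable for use as an orig tarball
    assert zipfile.is_zipfile(egg_path)
    unpackdir = egg_path + ".unpacked"
    zipfile.ZipFile(egg_path).extractall(unpackdir)
    tar = tarfile.open(tar_path, "w:gz")
    tar.add(unpackdir, arcname=os.path.basename(egg_path)[:-4])
    tar.close()

repack_egg("WebFlash-0.1a9-py2.5.egg", "webflash_0.1a9.orig.tar.gz")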

16 May 2009

Martin Michlmayr: Automatic power on QNAP Turbo Station devices

All ARM-based QNAP machines can turn on automatically when power is applied, if the device was not powered down correctly. This is helpful when your power goes out. Follow the instructions below if you want to turn on automatic power using Debian on a QNAP TS-109, TS-119, TS-209, TS-219, TS-409 or TS-409U. Edit the file /etc/apt/sources.list and add the following line:
deb http://people.debian.org/~tbm/orion lenny main
Now install a new version of qcontrol:
apt-get update
apt-get install qcontrol
Finally, turn the automatic power feature on:
qcontrol -d &
qcontrol autopower on
kill %1
rm /var/run/qcontrol.sock
